mesh topology
Performance Analysis of Decentralized Federated Learning Deployments
Jiang, Chengyan, Fan, Jiamin, Halabi, Talal, Haque, Israat
The widespread adoption of smartphones and smart wearable devices has driven the use of Centralized Federated Learning (CFL) for training powerful machine learning models while preserving data privacy. However, CFL faces limitations due to its overreliance on a central server, which impacts latency and system robustness. Decentralized Federated Learning (DFL) is introduced to address these challenges. It facilitates direct collaboration among participating devices without relying on a central server: each device can independently connect with other devices and share model parameters. This work explores crucial factors influencing the convergence and generalization capacity of DFL models, emphasizing network topologies, non-IID data distribution, and training strategies. We first derive the convergence rate of different DFL model deployment strategies. Then, we comprehensively analyze various network topologies (e.g., linear, ring, star, and mesh) with different degrees of non-IID data and evaluate them over widely adopted machine learning models (e.g., classical models, deep neural networks, and Large Language Models) and real-world datasets. The results reveal that models converge to the optimal one for IID data, whereas the convergence rate is inversely proportional to the degree of non-IID data distribution. Our findings serve as valuable guidelines for designing effective DFL model deployments in practical applications.
- North America > United States > Washington (0.04)
- North America > Canada > British Columbia > Vancouver Island > Capital Regional District > Victoria (0.04)
- Information Technology > Security & Privacy (0.54)
- Information Technology > Hardware (0.34)
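The serverless parameter sharing this abstract describes can be sketched as a gossip-style averaging round over an adjacency matrix. This is a minimal illustration, not the paper's algorithm: `make_ring` and `gossip_round` are hypothetical helper names, models are reduced to parameter vectors, and each node simply averages itself with its topology-defined neighbors.

```python
import numpy as np

def make_ring(n):
    """Adjacency matrix for a ring topology: each node has two neighbors."""
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, (i - 1) % n] = True
        adj[i, (i + 1) % n] = True
    return adj

def gossip_round(params, adj):
    """One decentralized round: each node averages its own parameters
    with those received from its neighbors (no central server)."""
    n = len(params)
    updated = []
    for i in range(n):
        group = [params[i]] + [params[j] for j in range(n) if adj[i, j]]
        updated.append(np.mean(group, axis=0))
    return updated

# Toy run: 5 nodes start with different "models"; repeated rounds drive
# all nodes toward consensus at the global mean (here, 2.0).
params = [np.array([float(i)]) for i in range(5)]
adj = make_ring(5)
for _ in range(50):
    params = gossip_round(params, adj)
```

Because the implied mixing matrix is doubly stochastic, the global mean is preserved each round and all nodes converge to it; sparser topologies (linear, ring) mix more slowly than mesh, which is one intuition behind the topology-dependent convergence rates the paper derives.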
Reducing the Sensitivity of Neural Physics Simulators to Mesh Topology via Pretraining
Vaska, Nathan, Goodwin, Justin, Walters, Robin, Caceres, Rajmonda S.
Meshes are used to represent complex objects in high fidelity physics simulators across a variety of domains, such as radar sensing and aerodynamics. There is growing interest in using neural networks to accelerate physics simulations, and also a growing body of work on applying neural networks directly to irregular mesh data. Since multiple mesh topologies can represent the same object, mesh augmentation is typically required to handle topological variation when training neural networks. Due to the sensitivity of physics simulators to small changes in mesh shape, it is challenging to use these augmentations when training neural network-based physics simulators. In this work, we show that variations in mesh topology can significantly reduce the performance of neural network simulators. We evaluate whether pretraining can be used to address this issue, and find that employing an established autoencoder pretraining technique with graph embedding models reduces the sensitivity of neural network simulators to variations in mesh topology. Finally, we highlight future research directions that may further reduce neural simulator sensitivity to mesh topology.
- Government > Regional Government (0.47)
- Government > Military (0.46)
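The claim that multiple mesh topologies can represent the same object is easy to see concretely. The following is an illustrative example (not from the paper): a unit square triangulated along either diagonal has identical geometry but different connectivity, which is exactly the kind of variation the abstract says degrades neural simulator performance.

```python
import numpy as np

# One set of vertices, two valid triangulations of the same square.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
faces_a = np.array([[0, 1, 2], [0, 2, 3]])  # split along diagonal 0-2
faces_b = np.array([[0, 1, 3], [1, 2, 3]])  # split along diagonal 1-3

def total_area(verts, faces):
    """Sum of triangle areas via the 2D cross-product formula."""
    a = verts[faces[:, 0]]
    b = verts[faces[:, 1]]
    c = verts[faces[:, 2]]
    cross = (b - a)[:, 0] * (c - a)[:, 1] - (b - a)[:, 1] * (c - a)[:, 0]
    return float(np.abs(cross).sum() / 2.0)

# Same object, different topology: areas agree, face lists do not.
area_a = total_area(verts, faces_a)
area_b = total_area(verts, faces_b)
```

A geometric quantity like area is topology-invariant, but a network operating directly on the face connectivity sees two different inputs, motivating the pretraining approach the paper evaluates.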
UNet: A Generic and Reliable Multi-UAV Communication and Networking Architecture for Heterogeneous Applications
Roy, Sanku Kumar, Samshad, Mohamed, Rajawat, Ketan
The rapid growth of UAV applications necessitates a robust communication and networking architecture capable of addressing the diverse requirements of various applications concurrently, rather than relying on application-specific solutions. This paper proposes a generic and reliable multi-UAV communication and networking architecture designed to support the varying demands of heterogeneous applications, including short-range and long-range communication, star and mesh topologies, different data rates, and multiple wireless standards. Our architecture accommodates both ad hoc and infrastructure networks, ensuring seamless connectivity throughout the network. Additionally, we present the design of a multi-protocol UAV gateway that enables interoperability among various communication protocols. Furthermore, we introduce a data processing and service layer framework with a graphical user interface for a ground control station that facilitates remote control and monitoring from any location at any time. We practically implemented the proposed architecture and evaluated its performance using different metrics, demonstrating its effectiveness.
- Asia > India > Uttar Pradesh > Kanpur (0.04)
- North America > Canada > Alberta > Census Division No. 11 > Edmonton Metropolitan Region > Edmonton (0.04)
- Asia > China (0.04)
- Information Technology (1.00)
- Telecommunications > Networks (0.68)
- Transportation > Air (0.47)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (1.00)
Federated Learning Architectures: A Performance Evaluation with Crop Yield Prediction Application
Mukherjee, Anwesha, Buyya, Rajkumar
Federated learning has become an emerging technology for data analysis in IoT applications. This paper implements centralized and decentralized federated learning frameworks for crop yield prediction based on a Long Short-Term Memory network. For centralized federated learning, multiple clients and one server are considered, where the clients exchange their model updates with the server, which acts as the aggregator to build the global model. For the decentralized framework, a collaborative network is formed among the devices using either a ring topology or a mesh topology. In this network, each device receives model updates from its neighbour devices and performs aggregation to build the upgraded model. The performance of the centralized and decentralized federated learning frameworks is evaluated in terms of prediction accuracy, precision, recall, F1-score, and training time. The experimental results show that $\geq$97% and $>$97.5% prediction accuracy are achieved using the centralized and decentralized federated learning-based frameworks, respectively. The results also show that, using centralized federated learning, the response time can be reduced by $\sim$75% compared to the cloud-only framework. Finally, future research directions for the use of federated learning in crop yield prediction are explored.
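The centralized aggregation step this abstract describes, where the server builds the global model from client updates, is commonly realized as a sample-weighted average (FedAvg-style). The sketch below is a hedged illustration under that assumption, not the paper's implementation; `fedavg` is a hypothetical name and LSTM weights are flattened into plain vectors.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average client parameter vectors,
    weighted by each client's number of training samples."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)            # (num_clients, num_params)
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

# Toy round: three clients with unequal data volumes. The client with
# 20 samples pulls the global model toward its own weights.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_model = fedavg(clients, client_sizes=[10, 20, 10])
# global_model -> [3.0, 4.0]
```

In the decentralized variant, the same weighted average would instead be computed locally by each device over the updates received from its ring or mesh neighbors.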
Adapteva's 1,024-core Epiphany V mega-chip packs a serious wallop
Back in 2010, an Intel researcher said 1,000-core processors would be feasible. We're in that era, and the race to make chips faster and more power efficient is gaining steam. The latest mega-chip is a 1,024-core processor called Epiphany V, which was announced by Adapteva on Wednesday. Adapteva claims it will have enough juice to outperform some of the latest gaming and server processors. It has a mere 24 more cores than the 1,000-core KiloCore, a test chip made by researchers at University of California, Davis.
- North America > United States > California > Yolo County > Davis (0.25)
- Asia > Taiwan (0.05)